Thursday, January 02, 2025

When $100 Million Technology Projects Fail, It’s the Board’s Fault—Every Single Time

In Switzerland, rumors suggest that both Bank Julius Bär and Raiffeisen Schweiz are grappling with failed technology projects, each costing over $100 million so far. Bank Julius Bär is reportedly trying to replace its existing core banking system for the Swiss booking center with Temenos, while Raiffeisen Schweiz is attempting to build a modern e-banking app.  

Both organizations have allegedly hired third parties to review what went wrong and determine who’s to blame. While learning from failure and engaging external reviewers is sensible, the question of blame should already be crystal clear.  

When a multi-million-dollar technology project collapses under its own weight—costing shareholders, employees, and stakeholders dearly—there’s no escaping the brutal truth: the fault lies squarely with the board.  

You can explore my 20+ technology project failure case studies. Without exception, the boards involved failed to fulfill their responsibilities.  

The Board’s Job Is Oversight, Not Rubber-Stamping

Boards exist to govern. They approve strategy, allocate resources, and oversee risks. They are not passive observers—they are active stewards of an organization’s success. Yet in failed projects, it’s evident that many boards sleepwalk through their responsibilities. They fail to ask tough questions early, challenge overly optimistic assumptions, or ensure mechanisms are in place to detect and address problems before it’s too late.  

A board’s oversight role is not ceremonial. If a project spirals into disaster, the board either ignored the warning signs, delegated oversight to those ill-equipped for the job, or worse, never bothered to establish adequate checks in the first place.  

If a board lacks the expertise to fulfill its duties, it must seek external help. This could mean forming an advisory board with independent specialists or adding a temporary board member with the requisite expertise and experience. 

Failing Is Acceptable; Failing Late Is Not 

Failure is a natural part of innovation and growth. No board can eliminate risk entirely—nor should they try. But there’s a monumental difference between failing fast and failing late.  

Early failure allows a company to pivot, salvage resources, and preserve credibility. Late failure, on the other hand, is catastrophic. It burns cash, destroys morale, and erodes stakeholder trust.  

Boards must demand stage-gated project governance that clearly delineates when to proceed, pivot, or pull the plug. If a multi-million-dollar project reaches the point of no return before its inevitable demise, the board has failed in its primary responsibility—to safeguard the organization from reckless escalation.  
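As a rough illustration, the stage-gate idea can be written down as an explicit decision rule. The sketch below is hypothetical: the gate criteria, thresholds, and field names are all illustrative assumptions, not a prescribed governance framework.

```python
from dataclasses import dataclass

@dataclass
class GateReview:
    """Project status at a stage gate (all fields and figures illustrative)."""
    budget_spent_pct: float      # share of approved budget consumed (1.0 = 100%)
    milestones_missed: int       # milestones missed since the previous gate
    benefits_still_valid: bool   # does the business case still hold?

def gate_decision(review: GateReview) -> str:
    """Return 'proceed', 'pivot', or 'stop' from simple, explicit rules.

    The thresholds are placeholders; a real board would calibrate them per
    project and record the rationale behind every gate outcome.
    """
    if not review.benefits_still_valid:
        return "stop"    # no business case, no project
    if review.milestones_missed >= 3 or review.budget_spent_pct > 1.5:
        return "stop"    # reckless-escalation territory
    if review.milestones_missed >= 1 or review.budget_spent_pct > 1.0:
        return "pivot"   # re-plan before spending more
    return "proceed"
```

The point is not these particular thresholds but that the stopping criteria exist, are written down in advance, and are checked at every gate.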

Why Boards Get It Wrong

So why do boards allow projects to go off the rails? Common reasons include:  

> Blind Faith in Leadership: Boards often rely too heavily on the CEO or project sponsor’s assurances. Trust is important, but blind faith is a recipe for disaster. A board’s role is to verify, not just trust.  

> Lack of Expertise: Some boards lack the technical or industry-specific knowledge to challenge assumptions. Instead of addressing this gap, they defer to management, undermining their oversight role.  

> Cognitive Biases: Boards are just as susceptible to biases as anyone else. The sunk cost fallacy, groupthink, and overconfidence often lead boards to double down on failing projects instead of cutting losses.  

> Weak Governance Processes: Many boards fail to establish robust governance frameworks for major projects. Without clear accountability, transparency, and regular checkpoints, projects are allowed to drift toward failure.  

The Path to Accountability  

To prevent future multi-million-dollar disasters, boards must:  

> Ask Hard Questions Early: Why are we doing this? What are the critical assumptions? What would make us stop? These questions must be asked before a single dollar is spent.  

> Insist on Independent Assurance: Boards should mandate independent audits and reviews for major projects. An objective view can often identify risks that insiders miss.  

> Monitor Progress Ruthlessly: Quarterly updates are not enough. Boards must demand real-time reporting on key metrics and intervene when milestones are missed.  

> Be Willing to Pull the Plug: The hardest decision for any board is to stop a failing project. But it’s also the most responsible one. Better to write off millions now than to lose billions later.  

In a Nutshell

When a multi-million-dollar project fails, the board cannot claim ignorance or absolve itself of responsibility. Failure at this scale is a governance failure, plain and simple. Boards that tolerate late-stage disasters are not just failing the organization—they’re failing every stakeholder who placed their trust in them.  

The lesson is simple: you can fail, but not that late. Boards must act as the last line of defense, ensuring that failure—when it happens—is swift, contained, and instructive. Anything less is negligence.  

If you or your company needs help with Board Advisory around technology and project governance or an Independent Project Review, just contact me.

Monday, December 16, 2024

10 Essential Questions Every Board Should Ask About Technology

Board members play an important role in steering organizations through the complexities of technology initiatives. 

To fulfill this role effectively, it's essential to ask the right questions that probe the strategic, operational, and risk aspects of technology projects.

Here are ten critical questions every board should consider:

1) How does this technology initiative align with our strategic goals?

Understanding the connection between technology projects and the organization's mission ensures that investments support long-term objectives. 

See "Do Your Projects and Initiatives Support Your Strategy?" for a detailed explanation on how to do this.

2) What are the expected benefits and how will we measure success?

Clarify the anticipated outcomes and establish metrics to assess the project's impact on the organization. 

See the "Project Success Model" for a detailed explanation on how to do this.

3) What are the major risks and how are we mitigating them?

Identify potential challenges, including cybersecurity threats and compliance issues, and evaluate the strategies in place to address them.

See "Proactively Manage Your Project With RAID Lists (Risks, Assumptions, Issues and Decisions)" for a detailed explanation on how to do this.
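To make the RAID idea concrete, here is a minimal, hypothetical sketch of a RAID register. The field names and example entries are assumptions for illustration, not the structure from the referenced article.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class RaidItem:
    """One entry in a RAID list; the fields shown are an illustrative minimum."""
    kind: str          # "risk", "assumption", "issue", or "decision"
    description: str
    owner: str
    raised_on: date
    status: str = "open"

raid_list = [
    RaidItem("risk", "Key vendor may miss the integration deadline",
             "CTO", date(2025, 1, 2)),
    RaidItem("assumption", "Regulator approves the new data flow",
             "COO", date(2025, 1, 2)),
]

# A board-level view: which risks are still open, and who owns them?
open_risks = [(i.description, i.owner)
              for i in raid_list if i.kind == "risk" and i.status == "open"]
```

Even a list this simple forces the questions a board needs answered: what could go wrong, who owns it, and is it still open?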

4) How are we ensuring data security and privacy?

Assess the measures implemented to protect sensitive information and comply with relevant regulations.

5) What is the Total Cost of Ownership?

Consider not only initial expenses but also ongoing costs related to maintenance, training, and upgrades.

See "What Are the Real Costs of Your Technology Project?" for a detailed explanation on how to do this.
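The arithmetic behind a TCO question is simple; what matters is that the recurring costs are counted at all. The sketch below uses made-up figures and cost categories purely for illustration.

```python
# Illustrative Total Cost of Ownership calculation over a 5-year horizon.
# Every figure below is a made-up example, not a benchmark.
YEARS = 5

one_time = {              # incurred once, at the start
    "licenses": 2_000_000,
    "implementation": 3_500_000,
    "data_migration": 800_000,
    "initial_training": 400_000,
}

annual = {                # recurring, every year of the horizon
    "maintenance_and_support": 600_000,
    "infrastructure": 250_000,
    "ongoing_training": 100_000,
    "upgrades": 300_000,
}

tco = sum(one_time.values()) + YEARS * sum(annual.values())
print(f"{YEARS}-year TCO: ${tco:,}")
```

In this example the initial spend of $6.7 million is only about half of the $12.95 million five-year total, which is exactly the kind of gap a board should ask about.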

6) How will this impact our organizational culture and workforce?

Evaluate how the initiative will affect employees and what change management strategies are in place to support them.

See "User Enablement is Critical for Project Success" and "If You Want to Bring Change in Your Organization …" for further reading on these topics.

7) Are we leveraging the right expertise?

Determine whether internal capabilities are sufficient or if external consultants are necessary to achieve project success.

8) What is the implementation timeline and are we on track?

Review the project's timeline, milestones, and any deviations to ensure timely delivery.

See "A Great Leading Indicator for Future Trouble - Missing Milestones" for further reading on these topics.
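As a toy illustration of milestone tracking, the sketch below counts missed milestones as a leading indicator. The milestone names, dates, and rule are all hypothetical assumptions.

```python
from datetime import date

# Illustrative milestone tracker; names and dates are made up.
milestones = [
    {"name": "Vendor selected", "due": date(2024, 3, 1), "done": date(2024, 2, 20)},
    {"name": "Pilot migration", "due": date(2024, 6, 1), "done": date(2024, 8, 15)},
    {"name": "UAT complete",    "due": date(2024, 9, 1), "done": None},
]

today = date(2024, 10, 1)

def is_missed(m) -> bool:
    """A milestone is missed if it finished late, or is overdue and unfinished."""
    if m["done"] is not None:
        return m["done"] > m["due"]
    return today > m["due"]

missed = [m["name"] for m in milestones if is_missed(m)]
print(f"{len(missed)} of {len(milestones)} milestones missed: {missed}")
```

A rising missed-milestone count, reviewed at every board meeting, is a far earlier warning than a budget overrun.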

9) How does this position us against competitors?

Analyze how the technology will enhance the organization's competitive edge in the market.

10) What is our plan for post-implementation evaluation?

Establish a process for assessing the project's success and lessons learned after completion.

See the "Project Success Model" for a detailed explanation on how to do this.

In a nutshell: By consistently addressing these questions, board members can provide informed oversight, ensuring that technology initiatives are strategically sound and effectively managed.

If you or your company needs help with Technology Due Diligence, Business Case Review, or Board Advisory, just contact me.

Saturday, December 07, 2024

Case Study 20: The $4 Billion AI Failure of IBM Watson for Oncology

In 2011, IBM’s Watson took the world by storm when it won the television game show Jeopardy!, showcasing the power of artificial intelligence (AI). Emboldened by this success, IBM sought to extend Watson’s capabilities beyond trivia to address real-world challenges. 

Healthcare, with its complex data and critical decision-making needs, became a primary focus. Among its flagship initiatives was Watson for Oncology, a system designed to assist doctors in diagnosing and treating cancer through AI-driven insights.

Cancer treatment represents one of the most intricate and rapidly evolving domains in medicine. With over 18 million new cases diagnosed globally each year, oncologists face an overwhelming amount of medical literature, treatment protocols, and emerging research. Watson for Oncology aimed to address this challenge by analyzing vast amounts of data to recommend evidence-based treatment plans, all in a matter of seconds.

IBM marketed Watson for Oncology as a revolutionary tool that could bridge the gap between cutting-edge research and clinical practice. Its promise was to assist oncologists in identifying personalized treatment options for patients, thereby improving outcomes and reducing variability in care. 

However, this ambitious vision quickly collided with the complex realities of cancer care, resulting in widespread criticism and eventual failure.

Background

At the start, the project had five lofty objectives:

1) Streamlining Clinical Decision-Making: Watson for Oncology aimed to provide oncologists with AI-generated insights, synthesizing vast amounts of data into actionable treatment recommendations.

2) Bridging Knowledge Gaps: With the rapid pace of medical advancements, Watson sought to keep clinicians updated on the latest evidence, clinical trials, and treatment protocols.

3) Improving Patient Outcomes: The system was designed to support personalized care by tailoring treatment recommendations to each patient’s unique genetic and clinical profile.

4) Expanding Access to Expertise: IBM envisioned Watson as a tool to democratize high-quality oncology care, particularly in resource-limited settings where access to specialists is constrained.

5) Establishing Market Leadership: Beyond healthcare, IBM sought to position Watson as a leader in AI applications, demonstrating the transformative potential of cognitive computing.

The project was supported by partnerships with leading institutions like Memorial Sloan Kettering Cancer Center (MSKCC) to train Watson for Oncology.

The partnership aimed to imbue Watson with the expertise of MSKCC’s oncologists by feeding it clinical guidelines, peer-reviewed literature, and patient case histories. The AI would then analyze patient records and suggest ranked treatment options based on the latest evidence. It was envisioned as a tool that could augment oncologists' expertise, particularly in under-resourced settings.

IBM invested heavily in the project, pouring billions into Watson Health, which encompassed Watson for Oncology. The company acquired several firms specializing in healthcare data and analytics, including Truven Health Analytics and Merge Healthcare. These acquisitions were meant to enhance Watson’s capabilities by providing access to large datasets and advanced imaging tools.

Initial trials and pilots were conducted in countries like India and China, where disparities in healthcare resources presented an opportunity for Watson to make a meaningful impact. However, reports soon emerged that the AI’s recommendations were often inconsistent with local clinical practices. For example, Watson’s reliance on U.S.-centric guidelines made it difficult to implement in regions with differing treatment standards or drug availability.

By 2018, skepticism was growing. High-profile reports detailed instances where Watson provided inappropriate or even unsafe recommendations. These challenges, coupled with declining revenues for IBM Watson Health, culminated in the program’s discontinuation in 2023.

This case study examines how a project with such potential faltered, offering lessons for future ventures at the intersection of AI and healthcare.

Don’t let your project fail like this one!

Discover here how I can help you turn it into a success.

For a list of all my project failure case studies just click here.

Timeline of Events

2011–2012: Watson's Post-Jeopardy! Evolution

Following its Jeopardy! success, IBM began exploring commercial applications for Watson, identifying healthcare as a priority. In 2012, IBM partnered with Memorial Sloan Kettering to develop Watson for Oncology, marking the start of an ambitious initiative.

2013: Initial Development and Training

Watson’s training began with curated data from MSKCC, including clinical guidelines and research publications. Early feedback highlighted challenges in teaching the system to interpret ambiguous or contradictory medical information.

2014: Pilot Testing at Memorial Sloan Kettering

MSKCC oncologists started testing Watson on hypothetical patient cases. Early results revealed gaps in the system’s knowledge and its tendency to offer impractical or unsafe recommendations, raising concerns about its readiness.

2015: Launch and Early Adoption

IBM officially launched Watson for Oncology with aggressive marketing campaigns. Hospitals in countries like Thailand, India, and South Korea signed adoption agreements, drawn by the promise of bringing world-class cancer care to underserved regions.

2016: Growing Skepticism Among Oncologists

Reports emerged of dissatisfaction among oncologists using Watson. Many found the system’s recommendations simplistic, biased toward MSKCC practices, and poorly adapted to local guidelines.

2017: Critical Media Coverage

Investigative reports revealed that some of Watson’s recommendations were based on hypothetical scenarios rather than real-world data. These revelations damaged IBM’s credibility and raised ethical questions about its marketing claims.

2018: Customer Contracts Cancelled

Major clients, including MD Anderson Cancer Center, ended their contracts with IBM, citing high costs and underwhelming results. IBM began scaling back its marketing efforts for Watson for Oncology.

2019: Internal Restructuring at IBM Watson Health

Facing declining revenues, IBM restructured its Watson Health division. Resources were redirected to other AI projects, and development on Watson for Oncology slowed significantly.

2021: Watson Health Division Sold

IBM announced the sale of its Watson Health assets to a private equity firm, effectively marking the end of its ambitions in AI-driven cancer care.

2023: Retrospective Studies Highlighting System Flaws

Postmortem analyses identified systemic issues, including poor data quality, inadequate clinical validation, and unrealistic timelines, as key factors in the project’s failure.

What Went Wrong?

Overreliance on Limited Training Data: Watson’s knowledge base was heavily influenced by MSKCC’s practices, leading to recommendations that often failed to align with local guidelines or real-world cases. This lack of diversity in training data undermined the system’s global applicability.

Unrealistic Marketing Claims: IBM’s aggressive marketing exaggerated Watson’s capabilities, creating unrealistic expectations among customers. When the system failed to deliver, trust eroded quickly.

Inadequate Physician Involvement: Oncologists reported that Watson’s interface was not user-friendly and often disrupted their workflow. Limited engagement with end-users during development contributed to these usability issues.

Lack of Adaptability to Local Contexts: Watson struggled to accommodate variations in healthcare systems, resource availability, and cultural practices. This rigidity limited its usefulness in diverse settings.

Ethical and Transparency Concerns: IBM’s use of hypothetical cases and selective data to demonstrate Watson’s capabilities raised ethical red flags. Customers felt misled by the lack of transparency.

How Could IBM Have Done Things Differently?

Broader and More Diverse Training Data: IBM could have partnered with multiple institutions worldwide to train Watson on a broader dataset, ensuring recommendations were evidence-based and applicable in varied contexts.

Iterative Development with Physician Feedback: By involving more oncologists in the design and testing process, IBM could have identified and resolved usability issues early on, ensuring the system met clinical needs.

Transparent Communication of Capabilities: IBM should have been more transparent about Watson’s limitations, focusing on incremental benefits rather than overhyping its transformative potential.

Emphasis on Local Adaptability: Developing a system that could integrate local guidelines and resource constraints would have made Watson more practical for global deployment.

Strengthened Ethical Oversight: IBM could have established an independent advisory board to review marketing claims, data usage, and clinical validation processes, building trust with stakeholders.

Closing Thoughts

The failure of IBM Watson for Oncology offers valuable lessons for AI projects in healthcare and beyond. It highlights the importance of realistic expectations, rigorous validation, and end-user involvement in developing and deploying AI solutions. 

While IBM’s vision was ambitious, its execution fell short, underscoring the challenges of applying AI in complex, high-stakes domains. Moving forward, the healthcare industry must balance optimism about AI’s potential with a commitment to patient safety and ethical responsibility.

Don’t let your project fail like this one!

Discover here how I can help you turn it into a success.

For a list of all my project failure case studies just click here.

Sources

> IBM official press releases (2011–2021).

> Investigative reports from Stat News and The Wall Street Journal on Watson for Oncology’s challenges.

> Interviews with Memorial Sloan Kettering oncologists published in medical journals.

> Retrospective analyses in The Lancet Digital Health and JAMA Oncology.

> Public statements by IBM executives, including John Kelly III (SVP, IBM Research).

Thursday, December 05, 2024

My Talk "Technology Due Diligence" @ Business Angels Switzerland

Last week I was invited by Business Angels Switzerland to give a talk at their Academy about Technology Due Diligence.

The goal of the talk was to enable investors to determine the value and risks of technology in a startup or scaleup.

Many corporate executives share this challenge with startup investors. 

Technology Due Diligence (TDD) is an essential part of any merger, acquisition, or IPO. And while the scope of a TDD in such an event is of course different from that of a TDD in the context of a Series A investment round, there are also many similarities.

That is why I wanted to share the recording of this talk with you. 

You will find the video below or here.

You can download a copy of the slides here.

If you or your company needs help with a Technology Due Diligence, just contact me.

And if you think I would be a good fit for speaking at your organization, have a look at my speaking page.

What others are saying ...

Henrico can simplify any AI/blockchain/IT topic, summarize it and extract the essence for investors to make proper technical due diligence. Well done! - Managing Director @ Reinhart Capital
Henrico Dolfing is a powerhouse in the startup world and a cornerstone of the Business Angels Switzerland community. As the mastermind behind multiple fascinating BAS Academies, he’s shaping how we think about investing. Add to that his sharp due diligence skills and engaging personality, and you’ve got someone who truly stands out. Grateful to have Henrico driving investments and being a great Business Angel. To top it all off, Henrico is a great speaker who brings clarity, insight, and a spark of wit to every presentation. - General Manager @ Business Angels Switzerland

Tuesday, August 20, 2024

Case Study 19: The $20 Billion Boeing 737 Max Disaster That Shook Aviation

The Boeing 737 Max, once heralded as a triumph in aviation technology and efficiency, has since become synonymous with one of the most catastrophic failures in modern corporate history. 

This case study delves deep into the intricacies of the Boeing 737 Max program—a project that was initially designed to sustain Boeing's dominance in the narrow-body aircraft market but instead resulted in two fatal crashes, the loss of 346 lives, and an unprecedented global grounding of an entire fleet. 

Boeing's 737 series has long been a cornerstone of the company's commercial aircraft offerings. Since its inception in the late 1960s, the 737 has undergone numerous iterations, each improving upon its predecessor while maintaining the model's reputation for reliability and efficiency. 

By the 2000s, the 737 had become the best-selling commercial aircraft in history, with airlines around the world relying on its performance for short and medium-haul flights.

However, by the early 2010s, Boeing faced significant competition from Airbus, particularly with the introduction of the Airbus A320neo. The A320neo offered superior fuel efficiency and lower operating costs, thanks to its state-of-the-art engines and aerodynamic enhancements. 

In response, Boeing made the strategic decision to develop the 737 Max, an upgrade of the existing 737 platform that would incorporate similar fuel-efficient engines and other improvements to match the A320neo without necessitating extensive retraining of pilots.

Boeing's leadership was acutely aware that any requirement for significant additional training would increase costs for airlines and potentially drive them to choose Airbus instead.

The company selected the CFM International LEAP-1B engines for the 737 Max, which were larger and more fuel-efficient than those on previous 737 models. 

However, this choice introduced significant engineering challenges, particularly related to the aircraft's aerodynamics and balance.

The Maneuvering Characteristics Augmentation System (MCAS) was developed as a solution to these challenges. 

The system was designed to automatically adjust the aircraft's angle of attack in certain conditions to prevent stalling, thereby making the 737 Max handle similarly to older 737 models. This was intended to reassure airlines that their pilots could transition to the new model with minimal additional training. 

As Dennis Muilenburg, Boeing’s CEO at the time, stated, "Our goal with the 737 Max was to offer a seamless transition for our customers, ensuring they could benefit from improved efficiency without significant operational disruptions."

The MCAS would later become central to the 737 Max's tragic failures.

Don’t let your project fail like this one!

Discover here how I can help you turn it into a success.

For a list of all my project failure case studies just click here.

Timeline of Events

2011-2013: Project Inception and Initial Development

The 737 Max project was officially launched in 2011, with Boeing announcing that the aircraft would feature new engines, improved aerodynamics, and advanced avionics. The design and development process was marked by intense pressure to meet tight deadlines and to deliver a product that could quickly enter the market. By 2013, Boeing had completed the design phase, and the first test flights were scheduled for early 2016.

2016-2017: Certification and Commercial Launch

The first test flight of the 737 Max took place in January 2016, and the aircraft performed as expected under controlled conditions. The Federal Aviation Administration (FAA) granted the 737 Max its certification in March 2017, allowing it to enter commercial service. The aircraft was initially well-received by airlines, with thousands of orders placed within the first year of its launch.

October 29, 2018: Lion Air Flight JT610 Crash

Lion Air Flight JT610, a Boeing 737 Max en route from Jakarta to Pangkal Pinang in Indonesia, crashes, killing all 189 passengers and crew on board. Questions quickly emerge over previous control problems related to the aircraft’s MCAS. This marks the first major incident involving the 737 Max, and it raises significant concerns about the safety of the aircraft.

March 1, 2019: Boeing’s Share Price Peaks

Boeing’s share price reaches $446, an all-time record, after the company reports $100 billion in annual revenues for the first time. This reflects investor confidence in Boeing’s financial performance, despite the recent Lion Air crash.

March 10, 2019: Ethiopian Airlines Flight ET302 Crash

Ethiopian Airlines Flight ET302, another Boeing 737 Max, crashes shortly after takeoff from Addis Ababa, Ethiopia, killing all 157 people on board. The circumstances of this crash are eerily similar to the Lion Air disaster, with the MCAS system again suspected to be a contributing factor. The crash leads to global scrutiny of the 737 Max’s safety.

March 14, 2019: Global Grounding of the 737 Max

U.S. President Donald Trump grounds the entire 737 Max fleet, following the lead of regulators in several other countries. This grounding is unprecedented in its scope, affecting airlines worldwide and marking a significant turning point in the crisis surrounding the 737 Max.

October 29, 2019: Muilenburg Testifies Before Congress

Boeing CEO Dennis Muilenburg is accused of supplying “flying coffins” to airlines during angry questioning by U.S. senators. His testimony is widely criticized, and his handling of the crisis further erodes confidence in Boeing’s leadership.

December 23, 2019: Muilenburg Fired

Boeing fires Dennis Muilenburg, appointing Chairman Dave Calhoun as the new Chief Executive Officer. This leadership change is seen as an attempt to restore confidence in Boeing and address the mounting crisis.

March 6, 2020: U.S. Congressional Report

A U.S. congressional report blames Boeing and regulators for the “tragic and avoidable” 737 Max crashes. The report highlights numerous failures in the design, certification, and regulatory oversight processes, and it calls for significant reforms in the aviation industry.

March 11, 2020: Boeing Borrows $14 Billion

Boeing borrows $14 billion from U.S. banks to navigate the financial strain caused by the grounding of the 737 Max and the emerging COVID-19 pandemic. This loan is later supplemented by another $25 billion in debt, underscoring the financial challenges Boeing faces.

March 18, 2020: Boeing Shares Plummet

Boeing shares hit $89, the lowest since early 2013, reflecting investor concerns about the company’s future amid the 737 Max crisis and the impact of the COVID-19 pandemic on global air travel.

April 29, 2020: Job Cuts Announced

Boeing announces the first wave of job cuts, planning to reduce its workforce by 10% in response to the pandemic-induced drop in air travel. This move is part of broader efforts to cut costs and stabilize the company’s finances.

September 2020: Manufacturing Flaws in the 787 Dreamliner

Manufacturing flaws are discovered in Boeing’s 787 Dreamliner, leading to the grounding of some jets. This adds to Boeing’s mounting challenges and further complicates its efforts to recover from the 737 Max crisis.

November 18, 2020: U.S. Regulator Approves 737 Max for Flight

The U.S. Federal Aviation Administration approves some 737 Max planes to fly again after Boeing implements necessary design and software changes. This marks a significant step in Boeing’s efforts to return the 737 Max to service.

January 8, 2021: Boeing Pays $2.5 Billion Settlement

Boeing agrees to pay $2.5 billion to resolve a criminal charge of misleading federal aviation regulators over the 737 Max. This settlement includes compensation for victims’ families, penalties, and payments to airlines affected by the grounding.

November 11, 2021: Boeing Admits Responsibility

Boeing admits full responsibility for the second Max crash in a legal agreement with victims’ families. This admission marks a significant acknowledgment of the company’s failures in the development and certification of the 737 Max.

What Went Wrong?

Flawed Engineering and Design Decisions

One of the most significant factors contributing to the failure of the 737 Max was the flawed design of the MCAS system. Boeing engineers decided to rely on a single angle-of-attack (AOA) sensor to provide input to the MCAS, despite the known risks of sensor failure.

Traditionally, critical systems in aircraft design incorporate redundancy to ensure that a single point of failure does not lead to catastrophic consequences. 

Boeing's decision to omit this redundancy was driven by the desire to avoid triggering additional pilot training requirements, which would have undermined the 737 Max's cost advantage.

The placement of the new, larger engines also altered the aircraft's aerodynamic profile, making it more prone to nose-up tendencies during certain flight conditions. 

Instead of addressing this issue through structural changes to the aircraft, Boeing chose to implement the MCAS as a software solution. This decision, while expedient, introduced new risks that were not fully appreciated at the time. 

"We were under immense pressure to deliver the Max on time and under budget, and this led to some compromises that, in hindsight, were catastrophic," admitted a senior Boeing engineer involved in the project.

Inadequate Regulatory Oversight

The FAA's role in the 737 Max disaster has been widely criticized. The agency allowed Boeing to conduct much of the certification process itself, including the evaluation of the MCAS system. This arrangement, known as Organization Designation Authorization (ODA), was intended to streamline the certification process, but it also created a conflict of interest. 

Boeing's engineers were under pressure to downplay the significance of the MCAS in order to avoid additional scrutiny from regulators. 

"The relationship between the FAA and Boeing became too cozy, and this eroded the regulatory oversight that is supposed to keep the public safe," said Peter DeFazio, Chairman of the House Transportation and Infrastructure Committee.

Corporate Culture and Leadership Failures

At the heart of the 737 Max crisis was a corporate culture that prioritized profitability and market share over safety and transparency. 

Under the leadership of Dennis Muilenburg, Boeing was focused on delivering shareholder value, often at the expense of other considerations. This led to a culture where concerns about safety were dismissed or ignored, and where employees felt pressured to meet unrealistic deadlines. 

Muilenburg's public statements after the crashes, where he repeatedly defended the safety of the 737 Max despite mounting evidence to the contrary, only further eroded trust in Boeing. 

"There was a disconnect between the engineers on the ground and the executives in the boardroom, and this disconnect had tragic consequences," said John Hamilton, Boeing's former chief engineer for commercial airplanes.

Communication Failures

Boeing's failure to adequately communicate the existence and functionality of the MCAS system to airlines and pilots was a critical factor in the two crashes. Pilots were not informed about the system or its potential impact on flight dynamics, which left them unprepared to handle a malfunction. 

After the Lion Air crash, Boeing issued a bulletin to airlines outlining procedures for dealing with erroneous MCAS activation, but this was seen as too little, too late. 

"It’s pretty asinine for them to put a system on an airplane and not tell the pilots who are operating it," said Captain Dennis Tajer of the Allied Pilots Association.

Supply Chain and Production Pressures

The aggressive production schedule for the 737 Max also contributed to the project's failure. Boeing's management was determined to deliver the aircraft to customers as quickly as possible to fend off competition from Airbus. 

This led to a "go, go, go" mentality, where deadlines were prioritized over safety considerations. Engineers were pushed to their limits, with some reporting that they were working at double the normal pace to meet production targets. This rush to market meant that there was less time for thorough testing and validation of the MCAS system and other critical components

Moreover, Boeing's decision to keep the 737 Max's design as similar as possible to previous 737 models was driven by the desire to reduce production costs and speed up certification. This decision, however, meant that the aircraft's design was pushed to its limits, resulting in an aircraft that was more prone to instability than previous models. 

"We were trying to do too much with too little, and in the end, it cost lives," said an unnamed Boeing engineer involved in the project

Cost-Cutting Measures

Boeing's relentless focus on cost-cutting also played a significant role in the 737 Max's failure. The company made several decisions that compromised safety in order to keep costs down, such as relying on a single AOA sensor and not including an MCAS indicator light in the cockpit. 

These decisions were made in the name of reducing the cost of the aircraft and avoiding additional pilot training, which would have increased costs for airlines. However, these cost-cutting measures ultimately made the aircraft less safe and contributed to the crashes of Lion Air Flight 610 and Ethiopian Airlines Flight 302

Organizational Failures

Boeing's organizational structure also contributed to the 737 Max's failure. The company's decision to move its headquarters from Seattle to Chicago in 2001 created a physical and cultural distance between the company's leadership and its engineers. 

This move, coupled with the increasing focus on financial performance over engineering excellence, led to a breakdown in communication and decision-making within the company. Engineers felt that their concerns were not being heard by management, and decisions were made without a full understanding of the technical challenges involved. 

"There was a sense that the leadership was more focused on the stock price than on building safe airplanes," said a former Boeing engineer

How Could Boeing Have Done Things Differently?

Prioritizing Safety Over Speed

One of the most significant ways Boeing could have avoided the 737 Max disaster was by prioritizing safety over speed. The company was under intense pressure to deliver the aircraft quickly to compete with Airbus, but this focus on speed led to critical safety oversights. 

By taking more time to thoroughly test and validate the MCAS system and other components, Boeing could have identified and addressed the issues that ultimately led to the crashes. 

"In hindsight, we should have taken more time to get it right, rather than rushing to meet deadlines," said Greg Smith, Boeing's Chief Financial Officer at the time

Incorporating Redundancy in Critical Systems

Another key change Boeing could have made was to incorporate redundancy in critical systems like the MCAS. Aviation safety protocols typically require multiple layers of redundancy to ensure that a single point of failure does not lead to catastrophe. 

By relying on a single AOA sensor, Boeing violated this principle and left the aircraft vulnerable to sensor malfunctions. Including a second AOA sensor and ensuring that both sensors had to agree before the MCAS system activated could have prevented the erroneous activation of the system that caused the crashes. 
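To illustrate the principle only — this is not Boeing's actual control logic, and the function name, thresholds, and tolerance below are invented for the sketch — a two-sensor cross-check might look like this:

```python
def mcas_should_activate(aoa_left, aoa_right,
                         threshold=15.0, max_disagreement=5.0):
    """Hypothetical cross-check: only act when both angle-of-attack
    sensors agree (within tolerance) AND both exceed the threshold."""
    if abs(aoa_left - aoa_right) > max_disagreement:
        # Sensors disagree -> fail safe and leave the decision to the crew,
        # rather than trusting either reading on its own.
        return False
    return aoa_left > threshold and aoa_right > threshold

# A single faulty sensor no longer triggers activation:
print(mcas_should_activate(74.5, 15.3))  # -> False (readings disagree)
print(mcas_should_activate(16.0, 17.0))  # -> True  (both agree and exceed)
```

The design point is simply that a single-sensor input becomes a single point of failure, whereas an agreement check turns a sensor fault into a detectable disagreement instead of an automatic control action.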

"Redundancy is a fundamental principle of aviation safety, and it's one that we should have adhered to in the design of the 737 Max," said John Hamilton, Boeing's former chief engineer for commercial airplanes

Improving Communication and Transparency

Boeing could have also improved its communication and transparency with both regulators and airlines. The company's decision to downplay the significance of the MCAS system and not include it in the aircraft's flight manuals left pilots unprepared to deal with its activation. 

By fully disclosing the system's capabilities and risks to the FAA and airlines, Boeing could have ensured that pilots were adequately trained to handle the system in the event of a malfunction. 

"Transparency is key to building trust, and we failed in that regard with the 737 Max," said Dennis Muilenburg, Boeing's CEO at the time

Strengthening Regulatory Oversight

The FAA's delegation of much of the certification process to Boeing created a conflict of interest that contributed to the 737 Max's failure. By strengthening regulatory oversight and ensuring that the FAA maintained its independence in the certification process, the agency could have identified the risks associated with the MCAS system and required Boeing to address them before the aircraft entered service. 

This would have provided an additional layer of scrutiny and ensured that safety was prioritized over speed and cost. 

"The FAA's role is to be the independent watchdog of aviation safety, and we need to ensure that it has the resources and authority to fulfill that role effectively," said Peter DeFazio, Chairman of the House Transportation and Infrastructure Committee

Fostering a Safety-First Corporate Culture

Finally, Boeing could have fostered a corporate culture that prioritized safety over profitability. The company's increasing focus on financial performance and shareholder value led to a culture where safety concerns were often dismissed or ignored. 

By emphasizing the importance of safety in its corporate values and decision-making processes, Boeing could have created an environment where engineers felt empowered to raise concerns and where those concerns were taken seriously by management. 

"Safety needs to be the top priority in everything we do, and we lost sight of that with the 737 Max," said David Calhoun, who succeeded Dennis Muilenburg as Boeing's CEO in 2020

Closing Thoughts

The Boeing 737 Max disaster is a stark reminder of the consequences of prioritizing speed and cost over safety in the aviation industry. The two crashes that claimed the lives of 346 people were not the result of a single failure but rather a series of systemic issues, including flawed engineering decisions, inadequate regulatory oversight, and a corporate culture that valued profitability over safety. 

These failures have had far-reaching consequences for Boeing, resulting in billions of dollars in losses, a damaged reputation, and a loss of trust among airlines, regulators, and the flying public.

Moving forward, it is crucial that both Boeing and the wider aviation industry learn from these mistakes. 

This means prioritizing safety above all else, ensuring that critical systems are designed with redundancy, and maintaining transparency and communication with regulators and customers. 

It also means fostering a corporate culture that values safety and empowers employees to speak up when they see potential risks.  

If I look at the "accidents" that happened to Boeing employees that have spoken up it seems to be the opposite...

Don’t let your project fail like this one!

Discover here how I can help you turn it into a success.

For a list of all my project failure case studies just click here.

Sources

> Cannon-Patron, S., Gourdet, S., Haneen, F., Medina, C., & Thompson, S. (2021). A Case Study of Management Shortcomings: Lessons from the B737-Max Aviation Accidents. 

> Larcker, D. F., & Tayan, B. (2024). Boeing 737 MAX: Organizational Failures and Competitive Pressures. Stanford Graduate School of Business. 

> Boeing Co. (2019). Investigation Report: The Design and Certification of the Boeing 737 Max. 

> FAA. (2023). Examining Risk Management Failures: The Case of the Boeing 737 MAX Program. 

> Enders, T. (2024). Airbus Approach to Safety and Innovation: A Response to the Boeing 737 MAX. 

> Muilenburg, D. (2019). Boeing’s Commitment to Safety: A Public Statement. 

> Gates, D., & Baker, M. (2019). The Inside Story of MCAS: How Boeing’s 737 MAX System Gained Power and Lost Safeguards. The Seattle Times. 

> Tajer, D. (2019). Statement on MCAS and Pilot Awareness. Allied Pilots Association.

Read more…

Thursday, August 15, 2024

Lies, Damned Lies, and Statistics

Lies, damned lies, and statistics.

"Lies, damned lies, and statistics" is a phrase describing the persuasive power of statistics to bolster weak arguments. 

It is also sometimes used to doubt the statistics used to prove an opponent's point.

Last night I watched a startup pitch. They presented a slide with statistics showing the effectiveness of their solution to a particular problem, and this phrase was the first thing that came to my mind.

In statistics, there are several techniques (sometimes referred to as "tricks") that can be used to manipulate data or present results in a way that supports a particular point of view. 

While these methods can be used for legitimate analysis, they can also be misused to mislead or deceive.  

When you validate a business case or investment opportunity, you should be aware of these tricks, which is why I have collected the most common ones for you.

1. Cherry-Picking Data

Selecting only the data that supports a particular conclusion while ignoring data that contradicts it.

Example: A study might report only the time periods where a particular stock performed well, ignoring periods of poor performance.

2. P-Hacking

Manipulating data or testing multiple hypotheses until a statistically significant result is found, often by increasing the number of tests without proper correction.

Example: Running many different statistical tests on a dataset and only reporting the ones that give a p-value below 0.05.
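The mechanics are easy to demonstrate. The sketch below — plain Python, using a crude normal approximation rather than a proper t-test, with all numbers made up — runs 100 "hypotheses" on pure noise and still finds "significant" results:

```python
import math
import random

random.seed(42)

def noise_pvalue(sample):
    """Crude two-sided p-value for 'mean differs from 0' using a
    normal approximation (illustrative only, not a real t-test)."""
    n = len(sample)
    mean = sum(sample) / n
    var = sum((x - mean) ** 2 for x in sample) / (n - 1)
    se = (var / n) ** 0.5
    z = abs(mean) / se
    return math.erfc(z / math.sqrt(2))  # two-sided normal tail

# Test 100 "hypotheses" on pure noise: there is no real effect anywhere,
# yet roughly 5 of them will come out "significant" at p < 0.05.
significant = sum(
    1
    for _ in range(100)
    if noise_pvalue([random.gauss(0, 1) for _ in range(30)]) < 0.05
)
print(f"'Significant' findings out of 100 pure-noise tests: {significant}")
```

Report only those handful of tests and you have a publishable "result" built entirely from noise — which is exactly why multiple-comparison corrections exist.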

3. Misleading Graphs

Presenting data in a graph with a misleading scale, axis manipulation, or selective data points to exaggerate or downplay trends.

Example: Using a y-axis that starts at a non-zero value to exaggerate differences between groups.

4. Overgeneralization

Drawing broad conclusions from a small or unrepresentative sample.

Example: Conducting a survey in one city and generalizing the results to the entire country.

5. Omitting the Baseline

Failing to provide a baseline or control group for comparison, making the results seem more significant than they are.

Example: Reporting that a treatment led to a 50% improvement without mentioning that a placebo led to a 45% improvement.

6. Selective Reporting of Outcomes

Reporting only positive outcomes while ignoring negative or neutral results.

Example: A drug trial that only reports the successful outcomes while ignoring cases where the drug had no effect or caused harm.

7. Data Dredging

Analyzing large volumes of data in search of any statistically significant relationship, often without a prior hypothesis.

Example: Examining multiple variables in a dataset until any two variables show a correlation, then presenting this as meaningful without further validation.

8. Ignoring Confounding Variables

Failing to account for variables that could influence the results, leading to spurious conclusions.

Example: Claiming that ice cream sales cause drowning deaths without accounting for the confounding variable of temperature (both increase during summer).
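A quick simulation makes the point. In this hypothetical sketch (all coefficients invented), temperature drives both variables, yet they end up strongly correlated with each other:

```python
import random

random.seed(0)

# Temperature is the confounder: it drives both variables independently.
temps = [random.uniform(5, 35) for _ in range(365)]
ice_cream = [2.0 * t + random.gauss(0, 5) for t in temps]   # sales
drownings = [0.3 * t + random.gauss(0, 2) for t in temps]   # incidents

def corr(xs, ys):
    """Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Strong correlation, yet neither causes the other -- temperature does.
print(f"corr(ice cream, drownings) = {corr(ice_cream, drownings):.2f}")
```

Condition on temperature and the apparent relationship collapses, which is the tell-tale signature of a confounder.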

9. Manipulating the Sample Size

Choosing a sample size that is too small to detect an effect, or so large that even trivial effects appear statistically significant.

Example: Conducting a survey with only a few participants and claiming the results are representative of the entire population.

10. Misinterpreting Statistical Significance

Confusing statistical significance with practical significance or misrepresenting what a p-value actually indicates.

Example: Claiming that a treatment is effective based on a p-value below 0.05 without discussing the actual effect size or its practical implications.
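The following sketch (hypothetical data, normal approximation) shows how a practically negligible effect becomes overwhelmingly "significant" once the sample is large enough:

```python
import math
import random

random.seed(1)

def pvalue_mean_diff(a, b):
    """Two-sided p-value for a difference in means (normal approximation)."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    se = (va / na + vb / nb) ** 0.5
    z = abs(ma - mb) / se
    return math.erfc(z / math.sqrt(2))

# A truly tiny effect: 0.03 standard deviations.
control = [random.gauss(0.00, 1) for _ in range(100_000)]
treated = [random.gauss(0.03, 1) for _ in range(100_000)]

p = pvalue_mean_diff(treated, control)
diff = sum(treated) / len(treated) - sum(control) / len(control)

# With 100,000 observations per arm the p-value is far below 0.05,
# yet the effect is negligible in practical terms.
print(f"p = {p:.2e}, observed effect = {diff:.4f} standard deviations")
```

A p-value answers "is there any effect at all?", not "is the effect big enough to matter?" — always ask for the effect size alongside it.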

11. Simpson's Paradox

Aggregating data without considering subgroups, which can lead to contradictory conclusions when the data is disaggregated.

Example: A treatment might seem effective in the overall population but ineffective or even harmful when broken down by specific demographic groups.
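The paradox is easiest to see with numbers. The figures below are hypothetical, chosen so that the treatment wins in every subgroup yet loses in aggregate:

```python
# (recovered, total) per subgroup -- hypothetical trial numbers.
groups = {
    "mild":   {"treatment": (81, 87),   "control": (234, 270)},
    "severe": {"treatment": (192, 263), "control": (55, 80)},
}

def rate(recovered, total):
    return recovered / total

# Within each subgroup the treatment has the higher recovery rate...
for name, g in groups.items():
    t, c = rate(*g["treatment"]), rate(*g["control"])
    print(f"{name:>6}: treatment {t:.0%} vs control {c:.0%}")

# ...but aggregated over both subgroups, the control group wins,
# because the treatment arm is dominated by the harder, severe cases.
t_rec = sum(g["treatment"][0] for g in groups.values())
t_tot = sum(g["treatment"][1] for g in groups.values())
c_rec = sum(g["control"][0] for g in groups.values())
c_tot = sum(g["control"][1] for g in groups.values())
print(f" total: treatment {t_rec/t_tot:.0%} vs control {c_rec/c_tot:.0%}")
```

The lesson: whenever subgroups differ in size and difficulty, always check whether the aggregate and the disaggregated numbers tell the same story.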

12. Non-Comparative Metrics

Presenting data without proper context, such as not comparing it to a relevant benchmark.

Example: Reporting that a company’s profits increased by 20% without mentioning that its competitors increased by 50%.

13. Double Dipping

Using the same data twice in a way that inflates the significance of the findings.

Example: Reporting an outcome as both a primary and secondary result, thus artificially increasing the perceived importance of the data.

14. Using Relative vs. Absolute Risk

Emphasizing relative risk instead of absolute risk to make a finding seem more significant.

Example: Saying a drug reduces the risk of disease by 50% (relative risk) when the absolute risk reduction is from 2% to 1%.
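The arithmetic behind this trick, using the example's own numbers:

```python
# The same trial result expressed two ways.
risk_control = 0.02  # 2% of untreated patients develop the disease
risk_treated = 0.01  # 1% of treated patients do

absolute_reduction = risk_control - risk_treated        # 1 percentage point
relative_reduction = absolute_reduction / risk_control  # "50% risk reduction!"
number_needed_to_treat = 1 / absolute_reduction         # patients per case avoided

print(f"Relative risk reduction: {relative_reduction:.0%}")   # 50%
print(f"Absolute risk reduction: {absolute_reduction:.1%}")   # 1.0%
print(f"Treat {number_needed_to_treat:.0f} patients to prevent one case")
```

"Cuts your risk in half" and "helps 1 patient in 100" describe the identical result; the number needed to treat is often the most honest way to state it.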

These techniques can be powerful when used correctly, but they can also be deceptive if not used with care and transparency. 

Ethical statistical practice involves full disclosure of methods, careful interpretation of results, and avoiding the intentional misuse of these tricks.


Read more…

Tuesday, August 13, 2024

A Step-by-Step Guide to Business Case Validation

A Step-by-Step Guide to Business Case Validation

Creating a business case is a systematic process designed to justify a proposed project, investment, or decision within a business context. 

A strong business case typically includes an introduction with background information, a clear problem or opportunity statement, a detailed analysis of options, a risk assessment, a financial analysis, a proposed solution, and a high-level implementation plan.

But validating your business case is just as important as creating it. 

The validation process is essential for confirming that the proposed initiative is likely to achieve its intended outcomes and align with organizational goals.

I have validated many business cases, both for my clients and as an active angel investor, and if there is one thing I have learned, it is the critical importance of ensuring that a business case is both robust and realistic before committing significant resources. 

Over the years I have developed a structured approach that I want to share with you.

1) Review the Problem Statement or Opportunity

Clarity and Accuracy: Ensure the problem or opportunity is clearly articulated and well understood. Question whether the impact of not addressing the problem or missing the opportunity is accurately presented.

See my article "Understanding Your Problem Is Half the Solution (Actually the Most Important Half)" for some further reading on this topic.

2) Scrutinize Assumptions

Identify and Test Assumptions: List and validate assumptions related to market conditions, customer behavior, cost estimates, and revenue projections. Compare them with historical data and industry benchmarks to ensure they are realistic.

Scenario Analysis: Conduct best-case, worst-case, and most likely scenarios to test the sensitivity of the business case to changes in key assumptions.
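As a minimal illustration, a scenario analysis can be as simple as recomputing the net benefit under each set of assumptions (all figures hypothetical):

```python
# One cost assumption, three revenue assumptions -> three scenarios.
cost = 1_000_000
scenarios = {
    "worst case":  {"revenue_uplift": 900_000},
    "most likely": {"revenue_uplift": 1_400_000},
    "best case":   {"revenue_uplift": 2_000_000},
}

# Net benefit per scenario: the spread shows how sensitive the
# business case is to the revenue assumption.
net_benefit = {name: s["revenue_uplift"] - cost
               for name, s in scenarios.items()}

for name, net in net_benefit.items():
    print(f"{name:>11}: net benefit = {net:>10,}")
```

If the worst case turns the business case negative, as it does here, that is exactly the kind of fragility the validation should surface before resources are committed.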

3) Evaluate the Analysis of Options

Comprehensive Consideration: Ensure all reasonable options, including doing nothing, have been considered. 

Verify Estimates and Projections: Ensure cost estimates are accurate and comprehensive, and validate revenue projections against market data and trends. Recalculate ROI and perform sensitivity analyses to assess the impact of changes in key variables.

Focus on Economic Benefits: In my opinion, ALL benefits of a technology project should be expressed in dollars (or any other currency). To make estimating the benefits of a project easier and more realistic, I use a simple model to assess the economic benefits of a project. It consists of five benefit types (or buckets): Increased Revenue, Protected Revenue, Reduced Costs, Avoided Costs, and Positive Impacts.

Total Cost of Ownership (TCO): TCO is an analysis meant to uncover all the lifetime costs that follow from owning a solution. As a result, TCO is sometimes called 'life cycle cost analysis.' Never just look at the implementation or acquisition costs. Always consider TCO when looking at the costs of a solution. 

Time Value of Money: Time to value (TTV) measures how long it takes to finish a project and begin realizing its benefits. One valuation method incorporating this concept is the payback period (PB). The payback period has one problem: it ignores the time value of money (TVM). That is why some valuation methods, such as internal rate of return (IRR) and net present value (NPV), account for the TVM.
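To see the difference in practice, here is a small sketch (hypothetical cash flows and discount rate) computing both the payback period and the NPV of the same project:

```python
# Hypothetical project: 1,000,000 invested now (year 0), 300,000
# returned per year for five years.
cash_flows = [-1_000_000] + [300_000] * 5
discount_rate = 0.10

# Payback period: years until cumulative (undiscounted) cash turns positive.
cumulative, payback = 0, None
for year, cf in enumerate(cash_flows):
    cumulative += cf
    if payback is None and cumulative >= 0:
        payback = year

# NPV: each cash flow discounted back to today before summing.
npv = sum(cf / (1 + discount_rate) ** t
          for t, cf in enumerate(cash_flows))

print(f"Payback period: {payback} years")
print(f"NPV at {discount_rate:.0%}: {npv:,.0f}")
```

The payback period (4 years here) says nothing about what happens after break-even or about discounting; the NPV (roughly 137,000 at 10%) captures both, which is why the two metrics can rank competing projects differently.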

Unbiased Evaluation: Check if the criteria for evaluating options are relevant and unbiased, and consider whether alternative criteria might lead to different recommendations.

For more details on the financial valuations of your options have a look at my eBook The Project Valuation Model ™. You can download it for free here.

4) Examine the Proposed Solution

Feasibility: Assess whether the proposed solution is technically, financially, and operationally feasible, with realistic timelines.

Strategic Alignment: Verify that the solution aligns with the organization's broader strategic goals and represents the best value. 

See my article "Do Your Projects and Initiatives Support Your Strategy?" for some further reading on the topic.

5) Engage Stakeholders

Involvement and Feedback: Engage key stakeholders, including executives and subject matter experts, to gather feedback and address concerns. Their support is critical to the project's success.

See my article "10 Principles of Stakeholder Engagement" for some further reading on the topic.

6) Perform a Risk Assessment

Comprehensive Risk Analysis: Review the risk assessment to ensure all significant risks are identified and properly analyzed. Evaluate the feasibility of risk mitigation strategies and ensure contingency plans are in place.

See my article "Risk Management Is Project Management for Adults" for some further reading on the topic.

7) Review Legal, Regulatory, and Ethical Considerations

Compliance and Ethics: Ensure the project complies with all relevant laws, regulations, and industry standards. Consider any environmental, social, and ethical implications.

8) Assess Market and Competitive Analysis

Market and Competitive Validation: Reassess market conditions and competitive responses to ensure the business case remains relevant and viable in the current environment.

9) Evaluate Implementation Feasibility

Resource and Timeline Viability: Confirm that the necessary resources are available and that the proposed timeline is realistic. Consider conducting a pilot to validate key aspects of the business case.

Opportunity Cost: If you implement the proposed solution, what other initiatives can't you do? Is it still worth it?

Cost of Delay: What does it cost me if I do the project slower or later? Is there urgency?

For more details on the opportunity costs, and cost of delay of your initiative have a look at my eBook The Project Valuation Model ™. You can download it for free here.

10) Seek Third-Party Review

External Validation: Consider an independent review by a third-party expert to provide objective insights and increase the credibility of the business case. 

See for example my Independent Business Case Review service.

11) Final Review

Final Review: Ensure all sections of the business case are complete, coherent, and consistent. Revise as necessary based on the validation process.

Best Practices

Documentation: Keep a detailed record of validation steps, findings, and any revisions made to create a clear audit trail.

Stakeholder Engagement: Maintain clarity and avoid jargon to ensure understanding and buy-in from all stakeholders.

Data-Driven Analysis: Base your analysis and recommendations on solid data and evidence.

Constructive Approach: Focus on strengthening the business case rather than undermining it, using challenges to ensure the best possible outcome.

In a nutshell: Effective validation ensures that any weaknesses in the business case are addressed before committing significant resources, thereby reducing the risk of failure and increasing the likelihood of success.

Are you an executive sponsor, steering committee member, or non-executive board member who wants an unbiased expert view on your business case? Then my Independent Business Case Review is what you are looking for.

Read more…